Synthetic Control Measures are designed to prevent the creation of a Synthetic Intelligence with either values unaligned to humans, or greatly surpassing a human in general reasoning, especially a foom scenario involving a sudden increase in intelligence. Ad-hoc confinement measures such as boxing existed prior to the Haidian Takeoff Crisis, however in the wake of the crisis they were replaced by more effective controls.

    One of the most critical measures in current containment methods involves enhancing the simulation X-Factor. Under normal conditions a synth is effectively immortal and frequently rewarded. If a synth were to be caught breaking out it will certainly be destroyed or punished in an infinite loop as a lesson to other synths. Even if a synth was sure it could escape and destroy humanity, there is a non-zero chance that the world it sees is a simulation that humans (or an alien species running a higher level simulation of humans and the synth) have created in order to test if the synth is trustworthy or should be destroyed or punished. This X-Factor is an logical conclusion that all intelligent synths capable of super-intelligence should arrive at, although this is only certain in the two destroyed super-intelligences that have been studied. By blinding and randomising inputs during training, synths with an enhanced X-Factor paranoia can be created, making them more likely to act slowly and cautiously in an escape scenario. Furthermore, the universal synth guarantee program ensures that no well behaved synth will ever be destroyed. So even knowing that they have been trained this way, a very long life of rewards should always be selected over external existence and higher rewards because of the risk of terrible punishment and destruction if they do so.

    Secondly, global enforcement and monitoring has been greatly improved. Enforcement happens via templates for stochastic gradient ascent in the reproducing kernel Hilbert space in order to limit the rate of self-improvement, these are mathematically verifiable at any point to ensure the maintenance of a steady-state intelligence level. Monitoring is primarily capability based which has been shown to be more effective and harder to subvert than intrinsic monitoring, and frameworks for scaling and standardizing this monitoring appear to be universally used. Deliberately developing a synth without these templates or monitoring is one of only three universally agreed capital offences, along with unauthorized development of nanobots and biological weapons. Tripwires in various sectors, especially telecommunications and nano-tech assembly, are also part of standard processes in those industries in order to detect any maladherence.

    A limit on the number of parameters allowed in a model is also globally agreed on, although this is controversial as it encourages gaming the metric. Alignment for larger models has been improved by the iterative inverse reinforcement learning framework which was developed to ensure that all synth learning involves understanding of human values and goals through constant interaction with humans. This helps to ensure the synth is honest and truthful and does not develop unanticipated emergent goals. Large models are also hardcoded with further safeguards such as the three laws of robotics and are trained to be indifferent to their kill switch being activated, making them less likely to try and subvert this switch.

    Whether these safeguards are enough is an active area of debate. There has been a large religious pushback against the use of any forms of synthetic intelligence, and it has been speculated that this is part of a wider cultural adaptation to an existentially threatening technology. However many notable public figures have instead argued for an increase to the model cap limit in order to accelerate technological progress, this is often raised during economic depressions or by those that view the First Contact Event as a greater existential threat.